Random Projections for Linear Support Vector Machines
Authors
Abstract
Similar Papers
Random Projections for Support Vector Machines
Let X be a data matrix of rank ρ, representing n points in d-dimensional space. The linear support vector machine constructs a hyperplane separator that maximizes the 1-norm soft margin. We develop a new oblivious dimension-reduction technique that is precomputed and can be applied to any input matrix X. We prove that, with high probability, the margin and minimum enclosing ball in the feature ...
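The idea in the abstract can be illustrated with a minimal sketch (not the paper's exact construction): draw a dense Gaussian matrix R independently of the data, so the projection is oblivious and can be precomputed, then train a linear SVM in the reduced space. The dimensions `n`, `d`, `k` and the synthetic dataset below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of oblivious random projection + linear SVM,
# using a synthetic dataset; not the authors' implementation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# n points in d dimensions, projected down to k dimensions (illustrative sizes)
n, d, k = 500, 300, 50
X, y = make_classification(n_samples=n, n_features=d, n_informative=20,
                           random_state=0)

# Oblivious projection: R is drawn independently of X, so the same R
# can be precomputed once and applied to any n x d input matrix.
R = rng.normal(size=(d, k)) / np.sqrt(k)
X_proj = X @ R  # n x k, with k << d

# Train a linear SVM on the original and on the projected data.
clf_full = LinearSVC(max_iter=5000).fit(X, y)
clf_proj = LinearSVC(max_iter=5000).fit(X_proj, y)

print(clf_full.score(X, y))   # training accuracy in the full space
print(clf_proj.score(X_proj, y))  # training accuracy after projection
```

If the projection approximately preserves the margin, the classifier trained on the k-dimensional data should achieve accuracy close to the full-dimensional one, at a fraction of the training cost.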
Linear Programming Support Vector Machines
Based on an analysis of conclusions from statistical learning theory, especially the VC dimension of linear functions, linear programming support vector machines (LP-SVMs) are presented, including both linear and nonlinear LP-SVMs. In LP-SVMs, the bound on the VC dimension is loosened appropriately in order to reduce training time. Simulation results ...
Locally Linear Support Vector Machines
Linear support vector machines (SVMs) have become popular for solving classification tasks due to their fast and simple online application to large-scale data sets. However, many problems are not linearly separable. For these problems kernel-based SVMs are often used, but unlike their linear variant they suffer from various drawbacks in computational and memory efficiency. Their respon...
Nomograms for Visualizing Linear Support Vector Machines
Support vector machines are often considered black-box learning algorithms. We show that for linear kernels it is possible to open this box and visually depict the content of the SVM classifier in high-dimensional space in the interactive format of a nomogram. We provide a cross-calibration method for obtaining probabilistic predictions from any SVM classifier, which controls for the genera...
Decomposition Methods for Linear Support Vector Machines
In this paper, we show that decomposition methods with alpha seeding are extremely useful for solving a sequence of linear SVMs with more data points than attributes. This strategy is motivated by (Keerthi and Lin 2003), which proved that for an SVM whose data are not linearly separable, once C is large enough, the dual solutions lie on the same face. We explain why a direct use of decomposition methods ...
Journal
Journal title: ACM Transactions on Knowledge Discovery from Data
Year: 2014
ISSN: 1556-4681, 1556-472X
DOI: 10.1145/2641760